171 research outputs found

    Incorporating Prediction in Models for Two-Dimensional Smooth Pursuit

    A predictive component can contribute to the command signal for smooth pursuit. This is readily demonstrated by the fact that low-frequency sinusoidal target motion can be tracked with zero time delay or even with a small lead. The objective of this study was to characterize the predictive contributions to pursuit tracking more precisely by developing analytical models for predictive smooth pursuit. Subjects tracked a small target moving in two dimensions. In the simplest case, the periodic target motion was composed of the sums of two sinusoidal motions (SS) along both the horizontal and the vertical axes. Motions following the same or similar paths, but having a richer spectral composition, were produced by having the target follow the same path but at a constant speed (CS), and by combining the horizontal SS velocity with the vertical CS velocity and vice versa. Several different quantitative models were evaluated. The predictive contribution to the eye tracking command signal could be modeled as a low-pass filtered target acceleration signal with a time delay. This predictive signal, when combined with retinal image velocity at the same time delay, as in classical models for the initiation of pursuit, gave a good fit to the data. The weighting of the predictive acceleration component differed across experimental conditions, being largest when target motion was simplest, following the SS velocity profiles.
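
    The model summarized above lends itself to a compact simulation. The sketch below is illustrative only, not the authors' implementation; the function name, gain values, and the first-order low-pass filter are assumptions. It drives eye velocity with retinal image velocity (slip) and low-pass-filtered target acceleration, both taken at the same time delay.

```python
# Illustrative sketch: pursuit command from delayed retinal slip plus a delayed,
# low-pass-filtered target acceleration. Parameter values are assumptions.
import numpy as np

def pursuit_command(target_pos, dt=0.001, delay_s=0.1,
                    slip_gain_per_s=20.0, accel_weight=0.5, tau_filter=0.05):
    """Return simulated eye velocity for a 1D target position trace."""
    n = len(target_pos)
    d = int(round(delay_s / dt))
    target_vel = np.gradient(target_pos, dt)
    target_acc = np.gradient(target_vel, dt)

    # First-order low-pass filter of target acceleration (the predictive signal).
    filt_acc = np.zeros(n)
    for t in range(1, n):
        filt_acc[t] = filt_acc[t - 1] + dt / tau_filter * (target_acc[t] - filt_acc[t - 1])

    eye_vel = np.zeros(n)
    for t in range(d + 1, n):
        slip = target_vel[t - d] - eye_vel[t - d]   # retinal image velocity at the delay
        drive = slip_gain_per_s * slip + accel_weight * filt_acc[t - d]
        eye_vel[t] = eye_vel[t - 1] + dt * drive
    return eye_vel
```

    Fitting the acceleration weight separately per condition would mirror the finding that the predictive weighting was largest for the simplest (SS) target motion.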

    Eye-Hand Coordination during Dynamic Visuomotor Rotations

    Background: For many technology-driven visuomotor tasks, such as tele-surgery, human operators face situations in which the frames of reference for vision and action are misaligned and must be compensated in order to perform the tasks with the necessary precision. The cognitive mechanisms for the selection of appropriate frames of reference are still not fully understood. This study investigated the effect of changing visual and kinesthetic frames of reference during wrist pointing, simulating activities typical of tele-operations. Methods: Using a robotic manipulandum, subjects performed center-out pointing movements to visual targets presented on a computer screen by coordinating wrist flexion/extension with abduction/adduction. We compared movements in which the frames of reference were aligned (unperturbed condition) with movements performed under different combinations of visual/kinesthetic dynamic perturbations. The visual frame of reference was centered on the computer screen, while the kinesthetic frame was centered on the wrist joint. Both frames changed their orientation dynamically (angular velocity = 36°/s) with respect to the head-centered frame of reference (the eyes). Perturbations were either unimodal (visual or kinesthetic) or bimodal (visual + kinesthetic). As expected, pointing performance was best in the unperturbed condition. The spatial pointing error worsened dramatically during both unimodal and most bimodal conditions. However, in the bimodal condition in which both disturbances were in phase, adaptation was very fast and kinematic performance indicators approached the values of the unperturbed condition. Conclusions: This result suggests that subjects learned to exploit an “affordance” made available by the invariant phase relation between the visual and kinesthetic frames. It seems that after detecting this invariance, subjects used the kinesthetic input as an informative signal rather than a disturbance, in order to compensate for the visual rotation without going through the lengthy process of building an internal adaptation model. Practical implications are discussed with regard to the design of advanced, high-performance man-machine interfaces.
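
    The perturbations can be pictured as time-varying planar rotations applied between the wrist-centered and screen-centered frames. The sketch below is a loose illustration under assumptions of my own (frame names, composition order, and the in-phase condition), not the study's code; its point is that in the in-phase bimodal condition the relative angle between the two frames never changes.

```python
# Illustrative sketch of rotating visual and kinesthetic frames (36 deg/s).
# Composition order and variable names are assumptions, not the study's code.
import numpy as np

def rot(theta_deg):
    """2D rotation matrix for an angle given in degrees."""
    th = np.radians(theta_deg)
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

omega = 36.0                           # deg/s, angular velocity of the perturbed frames
t = 1.5                                # seconds into the trial
wrist_vector = np.array([1.0, 0.0])    # intended movement expressed in the wrist frame

vis_angle = omega * t                  # visual-frame rotation (0 when visually unperturbed)
kin_angle = omega * t                  # kinesthetic-frame rotation (0 when kinesthetically unperturbed)

# Cursor motion on the screen: the kinesthetic rotation acts on the limb-level
# vector, the visual rotation on its rendering.
cursor_vector = rot(vis_angle) @ rot(kin_angle) @ wrist_vector

# In the in-phase bimodal condition the relative angle between the frames is
# constant over time, the invariance the subjects appear to have exploited.
relative_angle = vis_angle - kin_angle
```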

    The human arm as a redundant manipulator: the control of path and joint angles

    Cruse H, Brüwer M. The human arm as a redundant manipulator: the control of path and joint angles. Biological Cybernetics. 1987;57(1-2):137-144. The movements studied involved moving the tip of a pointer attached to the hand from a given starting point to a given end point in a horizontal plane. Three joints (the shoulder, elbow, and wrist) were free to move. Thus the system represented a redundant manipulator. The coordination of the movements of the three joints was recorded and analyzed. The study concerned how the joints are controlled during a movement. The results are used to evaluate several current hypotheses for motor control. Basically, the incremental changes are calculated so as to move the tip of the manipulator along a straight line in the workspace. The values of the individual joint angles seem to be determined as follows. Starting from the initial values, the incremental changes in the three joint angles represent a compromise between two criteria: 1) the amount of angular change should be about the same in the three joints, and 2) the angular changes should minimize the total cost of the arm position as determined by cost functions defined for each joint as a function of its angle. By itself, this mechanism would produce strongly curved trajectories in joint space, which could include additional acceleration and deceleration in a joint. These are reduced by the influence of a third criterion, which fits with the mass-spring hypothesis. Thus the path is calculated as a compromise between a straight line in the workspace and a straight line in joint space. The latter can produce curved paths in the workspace such as were actually found in the experiments. A model calculation shows that these hypotheses can qualitatively describe the experimental findings.
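
    One concrete way to realize such a compromise, shown below, is an incremental inverse-kinematics step for a planar three-joint arm: a minimum-norm joint update moves the tip along a straight workspace line while a null-space term pulls each joint toward a low-cost angle. This is a generic sketch, not Cruse and Brüwer's algorithm; the link lengths, quadratic joint costs, and gains are assumptions.

```python
# Generic sketch: straight-line tip step distributed over three joints, plus a
# null-space pull toward comfortable joint angles. Not the paper's algorithm.
import numpy as np

L = np.array([0.30, 0.25, 0.15])   # assumed shoulder-elbow, elbow-wrist, wrist-tip lengths (m)

def tip(q):
    """Forward kinematics: planar position of the pointer tip for joint angles q (rad)."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    """2x3 Jacobian of the tip position with respect to the joint angles."""
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def step(q, target, q_comfort, gain=0.1, null_gain=0.05):
    """One incremental joint update toward `target` along a straight tip path."""
    J = jacobian(q)
    dx = gain * (target - tip(q))                  # small step along the straight workspace line
    J_pinv = np.linalg.pinv(J)
    dq_task = J_pinv @ dx                          # minimum-norm change, spread across the joints
    dq_cost = -null_gain * (q - q_comfort)         # gradient of simple quadratic joint costs
    dq_null = (np.eye(3) - J_pinv @ J) @ dq_cost   # cost descent that leaves the tip unmoved
    return q + dq_task + dq_null
```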

    The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements, the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits the experimental data. A second approach (generally considered the more difficult one) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, in an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. Accordingly, electromyographic signals showed brief, simultaneous inactivation of opposing muscles during the movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such optimality criteria may apply to a large range of biological movements.
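
    The cost term at the heart of this result is easy to write down for sampled data. The sketch below (function name and discretization are assumptions) computes the absolute work of the net joint torque for a single-joint movement; because the absolute value is not differentiable at zero, holding the torque at exactly zero around peak velocity, where the angular speed is largest, can be the cheapest option, which is the predicted inactivation.

```python
# Absolute-work term for a sampled single-joint movement (assumed discretization).
import numpy as np

def absolute_work(torque, joint_vel, dt):
    """Approximate integral of |tau(t) * qdot(t)| dt over the movement."""
    # |.| is non-smooth at zero torque: near peak velocity (large |qdot|),
    # keeping tau at exactly 0 avoids paying any cost at all in this term.
    return np.sum(np.abs(torque * joint_vel)) * dt
```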

    Grasping Objects with Environmentally Induced Position Uncertainty

    Due to noisy motor commands and imprecise and ambiguous sensory information, there is often substantial uncertainty about the location of objects in the environment relative to our body. Little is known about how well people manage and compensate for this uncertainty in purposive movement tasks like grasping. Grasping an object requires a reach trajectory that generates object-finger contacts permitting stable lifting. For objects with position uncertainty, some trajectories are more efficient than others in terms of the probability of producing stable grasps. We hypothesize that people attempt to generate efficient grasp trajectories that produce stable grasps at first contact, without requiring post-contact adjustments. We tested this hypothesis by comparing human uncertainty compensation in grasping objects against optimal predictions. Participants grasped and lifted a cylindrical object with position uncertainty, introduced by moving the cylinder with a robotic arm over a sequence of five positions sampled from a strongly oriented 2D Gaussian distribution. Preceding each reach, vision of the object was removed for the remainder of the trial and the cylinder was moved one additional time. In accord with optimal predictions, we found that people compensate by aligning the approach direction with the covariance angle to maintain grasp efficiency. This compensation yields a higher probability of achieving stable grasps at first contact than non-compensating strategies when grasping objects with directional position uncertainty, and the results provide the first demonstration that humans compensate for uncertainty in a complex purposive task.
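
    The “covariance angle” that the approach direction was aligned with can be recovered from the major axis of the position distribution. The helper below is an assumed analysis step, not the study's code.

```python
# Orientation of the major axis of a 2D position distribution (assumed helper).
import numpy as np

def covariance_angle(positions):
    """Angle (radians) of the most uncertain direction for an (n, 2) array of positions."""
    cov = np.cov(np.asarray(positions).T)        # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    major_axis = eigvecs[:, np.argmax(eigvals)]  # eigenvector of the largest eigenvalue
    return np.arctan2(major_axis[1], major_axis[0])
```

    Plausibly, approaching along this most uncertain direction leaves the residual position error mostly along the reach, where it matters least for making a stable first contact.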

    Affine differential geometry analysis of human arm movements

    Humans interact with their environment through sensory information and motor actions. These interactions may be understood via the underlying geometry of both perception and action. While motor space is typically assumed by default to be Euclidean, persistent behavioral observations point to a different underlying geometric structure. These observed regularities include the “two-thirds power law”, which connects path curvature with velocity, and “local isochrony”, which prescribes the relation between movement time and movement extent. Starting with these empirical observations, we have developed a mathematical framework based on differential geometry, Lie group theory, and Cartan’s moving frame method for the analysis of human hand trajectories. We also use this method to identify possible motion primitives, i.e., elementary building blocks from which more complicated movements are constructed. We show that a natural geometric description of continuous repetitive hand trajectories is not Euclidean but equi-affine. Specifically, equi-affine velocity is piecewise constant along movement segments, and movement execution time for a given segment is proportional to its equi-affine arc-length. Using this mathematical framework, we then analyze experimentally recorded drawing movements. To examine movement segmentation and classification, the two fundamental equi-affine differential invariants, equi-affine arc-length and equi-affine curvature, are calculated for the recorded movements. We also discuss the possible role of conic sections, i.e., curves with constant equi-affine curvature, as motor primitives, and focus in more detail on parabolas, the equi-affine geodesics. Finally, we explore possible schemes for the internal neural coding of motor commands by showing that the equi-affine framework is compatible with the common model of population coding of the hand velocity vector, when combined with a simple assumption on its dynamics. We then discuss several alternative explanations for the role that the equi-affine metric may play in internal representations of motion perception and production.
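
    For a sampled planar trajectory, equi-affine velocity has a simple closed form, and its piecewise constancy is equivalent to the two-thirds power law. The sketch below uses finite differences; the function name and sampling assumptions are mine.

```python
# Equi-affine velocity of a sampled planar path: (x'*y'' - y'*x'')**(1/3).
import numpy as np

def equi_affine_velocity(x, y, dt):
    """Equi-affine velocity along a planar hand path sampled at interval dt."""
    xd, yd = np.gradient(x, dt), np.gradient(y, dt)
    xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)
    det = xd * ydd - yd * xdd      # signed area rate spanned by velocity and acceleration
    return np.cbrt(det)            # cube root preserves the sign
```

    Integrating its absolute value over time gives the equi-affine arc-length used for segmentation above.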

    Evidence for Composite Cost Functions in Arm Movement Planning: An Inverse Optimal Control Approach

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize in order to coordinate degrees of freedom that outnumber those strictly necessary to fulfill a specific motor goal? This question has not yet received a final answer, since what is optimized partly depends on the requirements of the task. Many cost functions have been proposed, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results suggest that movements may result from the minimization of composite rather than single cost functions. In order to clarify this point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited consistent behavior in each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing a combination of two cost functions, namely a mix of the absolute work of the torques and the integrated squared joint acceleration. Our results thus support the cost-combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
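
    The composite criterion identified here can be evaluated on recorded trajectories roughly as below; the weighting alpha, array shapes, and function name are assumptions for illustration, not values reported in the paper.

```python
# Composite cost: weighted mix of the absolute work of torques and the integrated
# squared joint acceleration. Arrays have shape (time, n_joints); alpha is assumed.
import numpy as np

def composite_cost(torque, joint_vel, joint_acc, dt, alpha=0.6):
    """alpha * sum_i int |tau_i * qdot_i| dt + (1 - alpha) * sum_i int qddot_i**2 dt."""
    abs_work = np.sum(np.abs(torque * joint_vel)) * dt   # mechanical-energy-like term
    sq_accel = np.sum(joint_acc ** 2) * dt               # joint-level smoothness term
    return alpha * abs_work + (1.0 - alpha) * sq_accel
```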

    Posture of the arm when grasping spheres to place them elsewhere

    Despite the infinitely many ways to grasp a spherical object, regularities have been observed in the posture of the arm and the grasp orientation. In the present study, we set out to determine the factors that predict the grasp orientation and the final joint angles of reach-to-grasp movements. Subjects made reach-to-grasp movements toward a sphere to pick it up and place it at an indicated location. We varied the position of the sphere and the starting and placing positions. Multiple regression analysis showed that the sphere's azimuth relative to the subject was the best predictor of grasp orientation, although there were also smaller but reliable contributions of distance, starting position, and perhaps even placing position. The sphere's initial distance from the subject was the best predictor of the final elbow angle and shoulder elevation. A combination of the sphere's azimuth and distance from the subject was required to predict shoulder angle, trunk-head rotation, and lateral head position. The starting position best predicted the final wrist angle and sagittal head position. We conclude that the final posture of the arm when grasping a sphere to place it elsewhere is determined to a larger extent by the initial position of the object than by the starting and placing positions. © 2010 Springer-Verlag
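
    The regression itself is ordinary least squares with object and movement variables as predictors; the sketch below uses hypothetical predictor names and is not the study's analysis script.

```python
# Ordinary least-squares fit of grasp orientation on assumed predictors.
import numpy as np

def fit_grasp_orientation(azimuth, distance, start_pos, grasp_orientation):
    """Return [intercept, b_azimuth, b_distance, b_start] from an OLS fit."""
    X = np.column_stack([np.ones_like(azimuth), azimuth, distance, start_pos])
    coefs, *_ = np.linalg.lstsq(X, grasp_orientation, rcond=None)
    return coefs
```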

    Fast and fine-tuned corrections when the target of a hand movement is displaced

    To study the strategy for responding to target displacements during fast goal-directed arm movements, we examined how quickly corrections are initiated and how vigorously they are executed. We perturbed the target position at various moments before and after movement initiation. Corrections to perturbations before the movement started were initiated with the same latency as corrections to perturbations during the movement. Subjects also responded just as quickly to a second perturbation during the same reach, even when the two perturbations were separated by only 60 ms. The magnitude of the correction was minimized with respect to the time remaining until the end of the movement. We conclude that despite being executed after a fixed latency, these fast corrections are not stereotyped responses but are suited to the circumstances.

    Comparison of the haptic and visual deviations in a parallelity task

    Deviations in both haptic and visual spatial experiments are thought to be caused by a biasing influence of an egocentric reference frame. The strength of this influence is strongly participant-dependent. A parallelity test was used to study whether this strength is modality-independent. In both haptic and visual conditions, large, systematic, and participant-dependent deviations were found. However, although the correlation between the haptic and visual deviations was significant, the explained variance due to a common factor was only 20%. Therefore, the degree to which a participant is “egocentric” depends on modality and possibly, even more generally, on the experimental condition.
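
    The explained variance due to a common factor can be read, to a first approximation, as the squared correlation between the two deviation series; 20% shared variance corresponds to a correlation of roughly 0.45. A minimal check, with an assumed helper name:

```python
# Shared variance between haptic and visual deviation series (assumed helper name).
import numpy as np

def shared_variance(haptic_dev, visual_dev):
    """Pearson correlation between the two series and its square (shared variance)."""
    r = np.corrcoef(haptic_dev, visual_dev)[0, 1]
    return r, r ** 2
```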